Dynamic tunneling technique for efficient training of multilayer perceptrons
Authors
Abstract
A new efficient computational technique for training multilayer feedforward neural networks is proposed. The proposed algorithm consists of two learning phases. The first phase is a local search implementing gradient descent; the second is a direct search scheme implementing dynamic tunneling in weight space, which escapes the local trap and thereby generates the starting point for the next descent. Applying these two phases alternately forms a new training procedure that reaches a global minimum point from any arbitrary initial choice in the weight space. Simulation results are provided for five test examples to demonstrate the efficiency of the proposed method, which overcomes the problems of initialization and local minima in multilayer perceptrons.
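The alternation described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' exact method: the paper's tunneling phase integrates a differential equation in weight space, whereas here it is replaced by a simple random-probe search that accepts the first point whose loss falls below the last local minimum. All function and parameter names (`gradient_descent`, `tunnel`, `dynamic_tunneling_train`, the step sizes and probe counts) are illustrative assumptions.

```python
import numpy as np

def gradient_descent(loss, grad, w, lr=0.05, tol=1e-6, max_iter=5000):
    """Phase 1: gradient descent until the gradient vanishes (a local minimum)."""
    for _ in range(max_iter):
        g = grad(w)
        if np.linalg.norm(g) < tol:
            break
        gn = np.linalg.norm(g)
        if gn > 1.0:          # clip the step for stability (illustrative choice)
            g = g / gn
        w = w - lr * g
    return w

def tunnel(loss, w_star, e_star, radius=0.5, n_probes=200):
    """Phase 2 (simplified stand-in for dynamic tunneling): random probes at
    growing radius around the local minimum w_star; accept the first point
    whose loss is strictly below e_star. Returns None if none is found."""
    rng = np.random.default_rng(0)
    for k in range(1, n_probes + 1):
        cand = w_star + rng.normal(scale=radius * 10 * k / n_probes,
                                   size=w_star.shape)
        if loss(cand) < e_star:
            return cand       # a point of next descent in a lower basin
    return None

def dynamic_tunneling_train(loss, grad, w0, n_cycles=20):
    """Alternate the two phases until tunneling finds no lower basin."""
    w = np.asarray(w0, dtype=float)
    best_w, best_e = w, loss(w)
    for _ in range(n_cycles):
        w = gradient_descent(loss, grad, w)
        e = loss(w)
        if e < best_e:
            best_w, best_e = w, e
        nxt = tunnel(loss, w, e)
        if nxt is None:       # no lower-loss point found: stop
            break
        w = nxt
    return best_w, best_e
```

On a one-dimensional double-well loss such as `(w**2 - 1)**2 + 0.3*w`, plain gradient descent started at `w = 1.5` stalls in the higher basin near `w ≈ 0.96`; the tunneling phase then relocates the search to the lower basin near `w ≈ -1.04`.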
Similar resources
Efficient training of multilayer perceptrons using principal component analysis.
A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning re...
Training Multilayer Perceptrons with the Extended Kalman Algorithm
A large fraction of recent work in artificial neural nets uses multilayer perceptrons trained with the back-propagation algorithm described by Rumelhart et al. This algorithm converges slowly for large or complex problems such as speech recognition, where thousands of iterations may be needed for convergence even with small data sets. In this paper, we show that training multilayer perceptrons...
Feature Selection Using a Multilayer Perceptron
The problem of selecting the best set of features for target recognition using a multilayer perceptron is addressed in this paper. A technique has been developed which analyzes the weights in a multilayer perceptron to determine which features the network finds important and which are unimportant. A brief introduction to the use of multilayer perceptrons for classification and the training rule...
Geometrical Initialization, Parametrization and Control of Multilayer Perceptrons: Application to Function Approximation
This paper proposes a new method to reduce training time for neural nets used as function approximators. This method relies on a geometrical control of Multilayer Perceptrons (MLP). A geometrical initialization gives first better starting points for the learning process. A geometrical parametrization achieves then a more stable convergence. During the learning process, a dynamic geometrical con...
Journal: IEEE transactions on neural networks
Volume 10, Issue 1
Pages: -
Published: 1999